
Recurrent Neural Network-based error estimator in spectral methods for solving PDEs
Neural networks are increasingly being used to approximate solutions of Partial Differential Equations (PDEs) through machine learning approaches [2,5]. Adaptive spectral methods for solving PDEs adjust, at runtime and depending on the problem, how many basis functions are needed to accurately represent the ground-truth solution. These methods often rely on the evolution of the error to obtain better estimates of the required number of basis functions.

In this contribution, we train a Recurrent Neural Network (RNN) as an error estimator for adaptive spectral algorithms for PDEs. The RNN takes as part of its input the predictions made at earlier time steps, generated with a classical numerical integration method, and uses this information to predict the magnitude of the error in the current domain region. This, in turn, determines how many basis functions should be selected.

However, training RNNs with backpropagation through time is notoriously difficult due to the so-called Exploding and Vanishing Gradient Problem (EVGP) [4,6,7]. We have recently developed a computational framework, based on a combination of random feature networks and Koopman operator theory, that constructs all weights and biases of an RNN without any gradient-based optimization [1]. We apply this approach to elliptic PDEs posed in one or two spatial dimensions, which model problems such as steady-state heat conduction or fluid flow. The method alleviates the EVGP associated with RNNs, and its connection to Koopman theory offers deeper insight for analyzing adaptive algorithms in which error estimates vary spatially.
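To make the gradient-free idea concrete, the sketch below shows a minimal random-feature RNN used as a one-step error predictor. All names, dimensions, and the synthetic error sequence are hypothetical illustrations, not the framework of [1] (which additionally involves Koopman operator theory): the recurrent and input weights are drawn at random and kept fixed, and only a linear readout is fitted by regularized least squares, so no backpropagation through time is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: hidden state dimension and scalar error input per step.
n_hidden = 64
n_input = 1

# Random, fixed recurrent and input weights (never trained by backprop).
W_in = rng.normal(size=(n_hidden, n_input))
W_rec = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
# Rescale so the recurrent spectral radius is below 1, keeping the dynamics stable.
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))
b = rng.normal(scale=0.1, size=n_hidden)

def hidden_states(error_seq):
    """Run the fixed random RNN over a sequence of past error magnitudes."""
    h = np.zeros(n_hidden)
    states = []
    for e in error_seq:
        h = np.tanh(W_rec @ h + W_in @ np.atleast_1d(e) + b)
        states.append(h.copy())
    return np.array(states)

# Synthetic training data: an error history that decays over the time steps.
T = 200
errors = np.exp(-0.05 * np.arange(T)) * (1 + 0.05 * rng.standard_normal(T))

H = hidden_states(errors[:-1])   # hidden states after seeing e_0, ..., e_{t}
targets = errors[1:]             # learn to predict the next error magnitude

# Linear readout fitted by ridge regression -- a linear solve, not gradient descent.
lam = 1e-6
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ targets)

# Predicted next error; an adaptive loop could compare it to a tolerance
# and enlarge the spectral basis wherever the predicted error is too large.
pred = hidden_states(errors)[-1] @ W_out
tol, n_basis = 1e-3, 16
if pred > tol:
    n_basis += 4  # hypothetical refinement rule: add basis functions
```

In this toy setting the only fitted object is `W_out`, obtained from a single regularized linear system, which is what sidesteps the EVGP: no gradients ever flow through the recurrent weights.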